Demo Fortran90 MLP Backprop Code

NOTE: The following is what I wrote during my Doctorate and has not been modified since, so please treat it as such!

Fortran 90 is ideal for demonstrating neural networks because of its intrinsic matrix multiplication function MATMUL. This one function replaces the many DO loops that are required to achieve the same result in other computer languages. The part that people struggle with, the weight update algorithm, can be seen to be literally only two lines of code - and it really is that simple!
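As a small illustration of the point (this is not the code from this page; the array names and sizes are made up for the example), a whole hidden-layer forward pass collapses to a single MATMUL:

```
! Hypothetical illustration: forward pass through one hidden layer.
! w1(nhid, nin+1) holds the hidden weights (last column = bias weight),
! x(nin+1) is one input pattern with a trailing bias input of 1.
program matmul_demo
  implicit none
  integer, parameter :: nin = 3, nhid = 2
  real :: w1(nhid, nin+1), x(nin+1), hidden(nhid)
  w1 = 0.1                          ! arbitrary weights for the demo
  x  = (/ 0.2, 0.4, 0.6, 1.0 /)     ! three inputs plus the bias input
  ! One MATMUL replaces the nested DO loops needed in other languages:
  hidden = tanh( matmul(w1, x) )
  print *, hidden
end program matmul_demo
```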

The following source code is for a multilayer perceptron trained with the original backpropagation algorithm. The weights are updated after the presentation of each pattern. The hidden neurons have tanh activation functions and there is one output neuron with a linear activation function. Hidden and output layer bias inputs of 1 are created. There are two individual learning rates, one for each layer of weights. The patterns are presented in a random order, and an epoch is said to have occurred when as many patterns have been presented as there are in the training set. On completion of each epoch it is decided whether to accept the new weights, depending on whether the cost function has been lowered. This procedure could be carried out after each pattern presentation rather than after each epoch.
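A hedged sketch of a single pattern presentation under the scheme just described (tanh hidden layer, linear output, bias inputs of 1); the variable names here (w1, w2, eta1, eta2, deltah) are illustrative and not necessarily those used in the code below:

```
! Sketch of one backprop step for a 3-input, 2-hidden, 1-output MLP.
program backprop_step
  implicit none
  integer, parameter :: nin = 3, nhid = 2
  real :: w1(nhid, nin+1), w2(nhid+1)
  real :: x(nin+1), hidden(nhid+1), y, target, err
  real :: deltah(nhid), eta1, eta2
  eta1 = 0.1 ; eta2 = 0.1           ! one learning rate per layer
  w1 = 0.1 ; w2 = 0.1               ! arbitrary starting weights
  x = (/ 0.2, 0.4, 0.6, 1.0 /)      ! three inputs plus bias input of 1
  target = 0.4                      ! average of the three inputs
  ! Forward pass: tanh hidden layer, bias appended, linear output.
  hidden(1:nhid) = tanh( matmul(w1, x) )
  hidden(nhid+1) = 1.0              ! hidden-layer bias input
  y = dot_product(w2, hidden)
  ! Backward pass: output error and hidden-layer deltas.
  err    = target - y
  deltah = (1.0 - hidden(1:nhid)**2) * w2(1:nhid) * err
  ! The weight update itself really is just two lines:
  w2 = w2 + eta2 * err * hidden
  w1 = w1 + eta1 * matmul( reshape(deltah, (/ nhid, 1 /)), &
                           reshape(x,      (/ 1, nin+1 /)) )
end program backprop_step
```

The outer product in the second update line is formed with MATMUL on reshaped rank-2 arrays, which is one standard Fortran 90 way to do it.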

The code is written in Fortran 90. All undeclared variables are real unless the variable name begins with one of the letters I-N, in which case they are integers. The routine for setting the random seed is specific to the FTN90 compiler for PCs by NAG/Salford Software; otherwise the code should work on any Fortran 90 compiler. In the code anything following a ‘!’ is a comment.
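On a standard-conforming compiler, the compiler-specific seeding routine can be replaced with the intrinsic RANDOM_SEED. A minimal portable sketch (the seed values themselves are arbitrary):

```
program seed_demo
  implicit none
  integer :: n, i
  integer, allocatable :: seed(:)
  real :: r
  call random_seed(size=n)               ! how many integers the seed needs
  allocate(seed(n))
  seed = (/ (12345 + 37*i, i = 1, n) /)  ! any reproducible values will do
  call random_seed(put=seed)
  call random_number(r)                  ! r is uniform on [0,1)
  print *, r
end program seed_demo
```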

A picture of the neuron numbering convention is at the bottom of the page.

F90 Code

Pattern File
The following is the contents of the demonstration pattern file called ‘ave.pat’. The output (the 4th number on each line) is the average of the 3 inputs. The first line gives the number of patterns and the number of inputs.

NOTE - the f90 code does not normalise the data. The data in the pattern file should thus be normalised to lie in the range (-1 to 1) or (0 to 1).
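For readers without the file to hand, a hypothetical pattern file in this format (these are not the actual contents of ‘ave.pat’) with 4 patterns of 3 inputs, already normalised to the range 0 to 1, would look like this:

```
4 3
0.2 0.4 0.6 0.4
0.1 0.1 0.1 0.1
0.9 0.6 0.3 0.6
0.5 0.5 0.8 0.6
```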

Neuron Naming Convention